Step-size estimation for unconstrained optimization methods

Authors
Abstract


Similar articles

Step-size Estimation for Unconstrained Optimization Methods

Some computable schemes for descent methods without line search are proposed. Convergence properties are presented. Numerical experiments concerning large scale unconstrained minimization problems are reported. Mathematical subject classification: 90C30, 65K05, 49M37.
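The abstract does not spell out the proposed schemes. A common way to obtain a computable step size without a line search is the Barzilai-Borwein estimate, and the sketch below illustrates only that general idea; it is not taken from the paper, and the function and parameter names are invented for this example.

```python
import numpy as np

def bb_descent(grad, x0, max_iter=500, tol=1e-8, alpha0=1e-3):
    """Gradient descent whose step size is estimated with the Barzilai-Borwein
    (BB1) formula instead of a line search. Illustrative sketch only."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    alpha = alpha0                       # fallback / initial step size
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        x_new = x - alpha * g            # plain descent step, no line search
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g      # iterate and gradient differences
        sy = s @ y
        # BB1 estimate alpha = s^T s / s^T y, guarded against bad curvature
        alpha = (s @ s) / sy if sy > 1e-12 else alpha0
        x, g = x_new, g_new
    return x

# Usage on a convex quadratic f(x) = 0.5 x^T A x - b^T x (gradient A x - b)
A = np.diag([1.0, 10.0, 100.0])
b = np.ones(3)
x_min = bb_descent(lambda x: A @ x - b, np.zeros(3))
```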


Line search methods with variable sample size for unconstrained optimization

Minimization of an unconstrained objective function given in the form of a mathematical expectation is considered. The Sample Average Approximation (SAA) method transforms the expectation objective function into a real-valued deterministic function using a large sample, and thus deals with deterministic function minimization. The main drawback of this approach is its cost. A large sample of the random variable tha...
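As a concrete illustration of the transformation described above, the following sketch builds a deterministic sample-average objective from a fixed sample. The function names and the toy objective are assumptions made for this example, not code from the paper.

```python
import numpy as np

def saa_objective(f, xi_samples):
    """Sample Average Approximation: approximate E[f(x, xi)] by the average
    of f over a fixed sample of the random variable xi."""
    def f_hat(x):
        return np.mean([f(x, xi) for xi in xi_samples])
    return f_hat

# Toy example: f(x, xi) = (x - xi)^2, so the expectation is minimized at E[xi]
rng = np.random.default_rng(0)
sample = rng.normal(loc=2.0, scale=1.0, size=1000)   # the "large sample"
f_hat = saa_objective(lambda x, xi: (x - xi) ** 2, sample)
# f_hat is an ordinary deterministic function that any unconstrained solver can
# minimize, but each evaluation costs a full pass over the sample, which is the
# cost issue the abstract points to.
```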


Enriched Methods for Large-Scale Unconstrained Optimization

This paper describes a class of optimization methods that interlace iterations of the limited-memory BFGS method (L-BFGS) and a Hessian-free Newton method (HFN) in such a way that the information collected by one type of iteration improves the performance of the other. Curvature information about the objective function is stored in the form of a limited-memory matrix and plays the dual role of preco...


Tensor Methods for Large, Sparse Unconstrained Optimization

Tensor methods for unconstrained optimization were first introduced by Schnabel and Chow [SIAM J. Optimization, 1 (1991), pp. 293-315], who describe these methods for small to moderate-size problems. The major contribution of this paper is the extension of these methods to large, sparse unconstrained optimization problems. This extension requires an entirely new way of solving the tensor model th...


On the Behavior of Damped Quasi-Newton Methods for Unconstrained Optimization

We consider a family of damped quasi-Newton methods for solving unconstrained optimization problems. This family resembles that of Broyden with line searches, except that the change in gradients is replaced by a certain hybrid vector before updating the current Hessian approximation. This damped technique modifies the Hessian approximations so that they are maintained sufficiently positive defi...
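The abstract only says that the change in gradients is replaced by a certain hybrid vector. One standard instance of that idea is Powell's damping for the BFGS update, sketched below as an illustration; it is not necessarily the hybrid vector used by the family studied in the paper, and the names and the 0.2 threshold are conventional choices assumed for this example.

```python
import numpy as np

def damped_bfgs_update(B, s, y, eta=0.2):
    """One damped BFGS update of the Hessian approximation B.
    Powell-style damping: replace y by a convex combination of y and B s
    so that s^T y_bar stays at least a fixed fraction of s^T B s, which
    keeps the updated matrix positive definite. Illustrative sketch only."""
    Bs = B @ s
    sBs = s @ Bs
    sy = s @ y
    if sy >= eta * sBs:
        theta = 1.0                            # ordinary BFGS update is safe
    else:
        theta = (1.0 - eta) * sBs / (sBs - sy)
    y_bar = theta * y + (1.0 - theta) * Bs     # the "hybrid vector"
    return B - np.outer(Bs, Bs) / sBs + np.outer(y_bar, y_bar) / (s @ y_bar)

# Example with s^T y < 0, where an undamped update would lose positive
# definiteness and the damping kicks in
B_new = damped_bfgs_update(np.eye(2), np.array([1.0, 0.0]), np.array([-0.5, 0.3]))
```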



Journal

Journal title: Computational & Applied Mathematics

Year: 2005

ISSN: 0101-8205

DOI: 10.1590/s0101-82052005000300005